68 research outputs found

    AN EFFICIENT SEGMENTATION ALGORITHM FOR ENTITY INTERACTION

    The inventorying of biological diversity and studies in biocomplexity require the management of large electronic datasets of organisms. While species inventories have used structured electronic databases for some time, the computer modelling of functional interactions between biological entities at all levels of life is still in development. One of the challenges for this type of modelling is simulating the biotic interactions that occur between large sets of entities represented as computational agents. In real-time simulation of the biotic interactions of large populations, the computational processing time can be extensive. One way of increasing the efficiency of such a simulation is to partition the landscape so that each entity need only search its local space for entities that fall within its interaction proximity. This article presents an efficient segmentation algorithm for biotic interactions, for research related to the modelling and simulation of biological systems.
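    The partitioning idea described in the abstract can be illustrated as a uniform grid (a spatial hash): each entity is bucketed by the grid cell containing its position, and a proximity query scans only the adjacent cells rather than the whole population. This is a minimal sketch of the general technique, not the article's own algorithm; the function names, the square-cell layout, and the dictionary-based entity store are all assumptions made for illustration.

```python
import math
from collections import defaultdict

def build_grid(entities, cell_size):
    """Bucket each entity id by the grid cell containing its (x, y) position."""
    grid = defaultdict(list)
    for eid, (x, y) in entities.items():
        grid[(int(x // cell_size), int(y // cell_size))].append(eid)
    return grid

def neighbours_within(entities, grid, cell_size, eid, radius):
    """Entities within `radius` of `eid`, scanning only nearby grid cells."""
    x, y = entities[eid]
    cx, cy = int(x // cell_size), int(y // cell_size)
    reach = int(math.ceil(radius / cell_size))  # how many cells the radius spans
    found = []
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            for other in grid.get((cx + dx, cy + dy), []):
                if other == eid:
                    continue
                ox, oy = entities[other]
                if (ox - x) ** 2 + (oy - y) ** 2 <= radius ** 2:
                    found.append(other)
    return found
```

    With a cell size close to the interaction radius, each query touches a constant number of cells, so the per-step cost of interaction checks grows roughly linearly with population rather than quadratically.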

    Social information landscapes: automated mapping of large multimodal, longitudinal social networks

    Purpose – This article presents a Big Data solution as a methodological approach to the automated collection, cleaning, collation and mapping of multimodal, longitudinal datasets from social media. The article constructs Social Information Landscapes. Design/methodology/approach – The research presented here adopts a Big Data methodological approach for mapping user-generated content in social media. The methodology and algorithms presented are generic, and can be applied to diverse types of social media or user-generated content involving user interactions, such as blogs, comments on product pages and other forms of media, so long as the formal data structure proposed here can be constructed. Findings – The limited presentation of the sequential nature of content listings within social media and Web 2.0 pages, as viewed in Web browsers or on mobile devices, does not necessarily reveal or make obvious a hidden property of the medium: that every participant, from content producers to consumers, followers and subscribers, together with the content they produce or subscribe to, is intrinsically connected in a hidden but massive network. Such networks, when mapped, can be quantitatively analysed using social network analysis (e.g., centralities), and the semantics and sentiments can equally reveal valuable information with appropriate analytics. What remains difficult, however, is collecting, cleaning, collating and mapping such datasets into a sample large enough to yield important insights into community structure and the direction and polarity of interaction on diverse topics. This research solves that particular problem. Research limitations/implications – The automated mapping of extremely large networks, involving hundreds of thousands to millions of nodes over a long period of time, could assist in proving or even disproving theories.
The goal of this article is to demonstrate the feasibility of using automated approaches for acquiring massive, connected datasets for academic inquiry in the social sciences. Practical implications – The methods presented in this article, and the Big Data architecture presented here, have great practical value to individuals and institutions with low budgets. The software-hardware integrated architecture uses open source software, and the social information landscapes mapping algorithms are not difficult to implement. Originality/value – The majority of research in the literature uses traditional approaches for collecting social network data. The traditional approach is slow and tedious, and does not yield a sample large enough for significant analysis. Whilst the traditional approach collects only a small percentage of the data, the methods presented here could collect entire datasets from social media owing to their scalability and automated mapping techniques.
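    The core step of mapping user interactions into an analysable network can be sketched as follows: collate directed (author, target) interaction pairs into a weighted edge list, then compute a centrality measure over the resulting graph. This is an illustrative sketch of the general approach, not the article's own mapping algorithm; the in-degree centrality shown here is just one of the centrality measures the abstract alludes to, and all function names are hypothetical.

```python
from collections import Counter

def build_interaction_network(interactions):
    """Collate (author, target) pairs, e.g. replies or mentions,
    into a weighted directed edge list."""
    edges = Counter()
    for src, dst in interactions:
        if src != dst:            # drop self-interactions
            edges[(src, dst)] += 1
    return edges

def in_degree_centrality(edges):
    """Fraction of other nodes with an edge pointing at each node."""
    nodes = {n for edge in edges for n in edge}
    indeg = Counter(dst for (_, dst) in edges)
    denom = max(len(nodes) - 1, 1)
    return {n: indeg.get(n, 0) / denom for n in nodes}
```

    In a production pipeline the same collation would run incrementally over a stream of collected posts, with the edge list persisted to a database rather than held in memory.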

    The bottom-up formation and maintenance of a Twitter community: analysis of the #FreeJahar Twitter community

    Purpose – The article explores the formation, maintenance and disintegration of a fringe Twitter community in order to understand whether offline community structure applies to online communities. Design/methodology/approach – The research adopted Big Data methodological approaches in tracking user-generated content over a series of months and mapped online Twitter interactions as a multimodal, longitudinal 'social information landscape'. Centrality measures were employed to gauge the importance of particular user nodes within the complete network, and time-series analysis was used to track ego centralities in order to see whether this particular online community was maintained by specific egos. Findings – The case study shows that communities with distinct boundaries and memberships can form and exist within Twitter's limited user content and sequential policies, which, unlike other social media services, do not support formal groups, demonstrating the resilience of desperate online users when their ideology overcomes social media limitations. Analysis in this article using social network approaches also reveals that communities are formed and maintained from the bottom up. Research limitations/implications – The research data is based on a particular dataset which occurred within a specific time and space. However, due to the rapid, polarising group behaviour, growth, disintegration and decline of the online community, the dataset presents a 'laboratory' case against which many other online communities can be compared. It is highly possible that the case can be generalised to a broader range of communities, and that online community theories can be proved or disproved from it. Practical implications – The article shows that a particular group of egos with high activity, if removed, could entirely break the cohesiveness of the community. Conversely, strengthening such egos will reinforce the community's strength.
The questions mooted within the paper and the methodology outlined can potentially be applied in a variety of social science research areas. The contribution to the understanding of a complex social and political arena, as outlined in the paper, is a key example of such an application within an increasingly strategic research area, and this will surely be applied and developed further by the computer science and security community. Originality/value – The majority of research covering these domains has not focused on communities that are multimodal and longitudinal. This is mainly due to the challenges associated with the collection and analysis of continuous datasets that have high volume and velocity. Such datasets are therefore unexploited with regard to cyber-community research.
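    The practical implication above, that removing a few high-activity egos can break a community's cohesiveness, can be tested computationally by deleting a node from the interaction graph and counting how many connected components remain. The sketch below illustrates that test on an undirected graph; it is a hypothetical illustration of the idea, not the paper's analysis pipeline, and the function names are invented.

```python
from collections import defaultdict, deque

def components(nodes, edges):
    """Number of connected components in an undirected graph (BFS)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1                       # found a new component
        queue = deque([n])
        seen.add(n)
        while queue:
            for m in adj[queue.popleft()]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
    return count

def fragmentation_after_removal(nodes, edges, ego):
    """Components left once `ego` and its incident edges are removed."""
    rest = [n for n in nodes if n != ego]
    kept = [(a, b) for a, b in edges if ego not in (a, b)]
    return components(rest, kept)
```

    A star-shaped community held together by one hub ego shatters into as many components as the hub had neighbours, which is the extreme case of the cohesiveness effect described above.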

    An improved system for sentence-level novelty detection in textual streams

    Novelty detection in news events has long been a difficult problem. A number of models perform well on specific data streams, but certain issues are far from solved, particularly in large data streams from the WWW, where the unpredictability of new terms requires adaptation of the vector space model. We present a novel event detection system based on Incremental Term Frequency-Inverse Document Frequency (TF-IDF) weighting combined with Locality Sensitive Hashing (LSH). Our system can efficiently and effectively adapt to changes within data streams, incorporating any new terms through continual updates to the vector space model. In terms of miss probability, our proposed novelty detection framework outperforms a recognised baseline system by approximately 16% when evaluated on a benchmark dataset from Google News.
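    The incremental TF-IDF idea can be sketched as follows: document frequencies are updated as each sentence arrives, so the weighting adapts to new vocabulary, and a sentence is flagged as novel when its best cosine similarity to any earlier sentence falls below a threshold. This is a simplified illustration, not the paper's system: the LSH component, which the paper uses to avoid comparing against every stored vector, is omitted here for clarity, and the smoothing constants and threshold are assumptions.

```python
import math
from collections import Counter

class IncrementalNoveltyDetector:
    """Streamed TF-IDF novelty check: a sentence is novel when its best
    cosine similarity to any earlier sentence is below `threshold`."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.df = Counter()     # document frequencies, updated incrementally
        self.n_docs = 0
        self.history = []       # raw term counts of past sentences

    def _weights(self, counts):
        # TF-IDF using the *current* IDF, so weights adapt to new terms;
        # the +1 smoothing keeps repeated terms from zeroing out
        return {t: c * (math.log((1 + self.n_docs) / (1 + self.df[t])) + 1)
                for t, c in counts.items()}

    def _cosine(self, a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def is_novel(self, sentence):
        counts = Counter(sentence.lower().split())
        vec = self._weights(counts)
        novel = all(self._cosine(vec, self._weights(h)) < self.threshold
                    for h in self.history)
        # fold the sentence into the model after judging it
        self.history.append(counts)
        self.n_docs += 1
        self.df.update(counts.keys())
        return novel
```

    Replacing the exhaustive scan over `self.history` with an LSH index is what makes the approach viable at stream scale, since each new sentence then only needs comparing against candidates that hash to nearby buckets.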

    Crowdsourcing for 3D cultural heritage for George Town UNESCO World Heritage Site

    The uniqueness of George Town as a complex living web of social, cultural and economic activities embedded within built environments presents a challenge for 3D digital documentation, even for organisations with large financial resources. This challenge is a barrier when digital cultural products are needed to enhance site documentation and conservation, facilitate accessibility for academic study and research, fuel the creative economy, and deliver the many other benefits that usually accompany digitalisation activities globally. Whilst digital transformation may appear daunting, present technologies are sufficiently developed for quick adoption and use by both individuals and small organisations. This paper argues for the use of 3D technologies in combination with crowdsourcing mechanisms suited to the World Heritage Site, and sets out the benefits that should follow if George Town's cultural heritage is digitised and subsequently digitalised.

    Macro and Micro Environment for Diversity of Behaviour in Artificial Life Simulation


    A Photogrammetric Analysis of Cuneiform Tablets for the purpose of Digital Reconstruction

    Get PDF
    Despite the advances made in the recording and cataloguing of cuneiform tablets, there is still much work to be done in the field of cuneiform reconstruction. The processes employed to rebuild cuneiform fragments still rely on glue and putty, with manual matching of fragments from catalogues or individual collections. The reconstruction process is hindered by inadequate information about the size and shape of fragments, and the inaccessibility of the original fragments makes finding information difficult in some collections. Most catalogue data associated with cuneiform tablets concerns the content of the text, and not the physical appearance of complete or fragmented tablets. This paper shows how photogrammetric analysis of cuneiform tablets can be used to retrieve physical information directly from source materials without the risk of human error. An initial scan of 8000 images from the CDLI database has already revealed interesting new information about the tablets held in cuneiform archives, and offered new avenues for research within the cuneiform reconstruction process.
    IBM Visual and Spatial Technology Centre, Institute of Archaeology and Antiquity, University of Birmingham, Edgbaston, Birmingham, B15 2TT

    A comparison of the capacities of VR and 360-degree video for coordinating memory in the experience of cultural heritage

    Virtual Reality (VR), a medium which can create alternate representations of reality, could potentially be used to trigger memory recollection by connecting users with their past. Compared to commonly used media within museums, such as photos and videos, VR is distinct in its ability to move beyond the confines of time and space, immersing users in a reconstructed context and allowing them to take charge of the environment by interacting with objects, navigating the space, and evolving the narratives. In this paper, we compare audience experiences of cultural heritage (CH) between 360-degree video recordings and virtual environments to investigate the capacity of these two types of media for coordinating the audience's memory of the past. The findings will help guide the future design and evaluation of VR as a medium for communicating CH.
